
Conversation

@beicause (Contributor) commented Jan 21, 2026

Objective

This PR combines #22495 and #21357, and proposes a way to fully expose main color targets as a step towards #19704

Solution

  1. Adds a `MainColorTarget` component that specifies `main_texture_a`, `main_texture_b`, and `main_texture_multisampled`, plus a `WithMainColorTarget` component that links a camera to a `MainColorTarget` entity, which can be shared by multiple cameras.
  2. Cameras use separate main color targets by default; these are auto-configured and kept in sync with `Hdr`, `Msaa`, and `CameraMainTextureUsages`, unless `NoAutoConfiguredMainColorTarget` is present to opt out (see the sketch after this list).
  3. `Msaa` and `CameraMainTextureUsages` are no longer extracted and have no effect when `NoAutoConfiguredMainColorTarget` is present. In the render world, users should query the MSAA sample count and main texture format from `ExtractedView`, and the main texture size from `ExtractedCamera`.
  4. Render passes render the entire main texture without setting a viewport.
  5. Msaa writeback is removed.
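
A minimal sketch of how points 1 and 2 fit together (image creation and descriptors are elided here; the modified `split_screen` example below shows a complete setup):

```rust
// Minimal sketch only: `a` and `b` are assumed to be Image assets created
// elsewhere with matching sizes and formats.
fn spawn_shared_target(mut commands: Commands, a: Handle<Image>, b: Handle<Image>) {
    // One MainColorTarget entity holding main_texture_a/b (no multisampled texture here)...
    let target = commands.spawn(MainColorTarget::new(a, Some(b), None)).id();

    // ...shared by two cameras that opt out of the auto-configured per-camera target.
    for order in 0..2 {
        commands.spawn((
            Camera3d::default(),
            Camera { order, ..default() },
            NoAutoConfiguredMainColorTarget,
            WithMainColorTarget(target),
        ));
    }
}
```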

TODO:
- Make sure everything works.
- Implement optional texture writeback and input.

Update 2026-01-24:
Implemented color target input and render target; see the example at https://github.com/beicause/bevy/blob/configure-main-color-target/examples/3d/camera_color_target_graph.rs
`Msaa` and `CameraMainTextureUsages` are replaced by a single `CameraMainColorTargetConfig`.
`WithMainColorTarget` is self-referential for auto-configured cameras.

Testing

Added a camera_color_target_graph example.

Modified the `split_screen` example to demonstrate sharing main color targets:
//! Renders four cameras to the same window to accomplish "split screen".

use std::f32::consts::PI;

use bevy::{
    camera::{
        color_target::{MainColorTarget, NoAutoConfiguredMainColorTarget, WithMainColorTarget},
        Viewport,
    },
    light::CascadeShadowConfigBuilder,
    post_process::bloom::Bloom,
    prelude::*,
    window::WindowResized,
};
use bevy_asset::RenderAssetUsages;
use bevy_image::ToExtents;
use bevy_render::render_resource::{
    TextureDescriptor, TextureDimension, TextureFormat, TextureUsages,
};

fn main() {
    App::new()
        .add_plugins(DefaultPlugins)
        .add_systems(Startup, setup)
        .add_systems(Update, (set_camera_viewports, button_system))
        .run();
}

/// set up a simple 3D scene
fn setup(
    mut commands: Commands,
    asset_server: Res<AssetServer>,
    mut meshes: ResMut<Assets<Mesh>>,
    mut images: ResMut<Assets<Image>>,
    mut materials: ResMut<Assets<StandardMaterial>>,
) {
    // plane
    commands.spawn((
        Mesh3d(meshes.add(Plane3d::default().mesh().size(100.0, 100.0))),
        MeshMaterial3d(materials.add(Color::srgb(0.3, 2.0, 0.3))),
    ));

    commands.spawn(SceneRoot(
        asset_server.load(GltfAssetLabel::Scene(0).from_asset("models/animated/Fox.glb")),
    ));

    // Light
    commands.spawn((
        Transform::from_rotation(Quat::from_euler(EulerRot::ZYX, 0.0, 1.0, -PI / 4.)),
        DirectionalLight {
            shadow_maps_enabled: true,
            ..default()
        },
        CascadeShadowConfigBuilder {
            num_cascades: if cfg!(all(
                feature = "webgl2",
                target_arch = "wasm32",
                not(feature = "webgpu")
            )) {
                // Limited to 1 cascade in WebGL
                1
            } else {
                2
            },
            first_cascade_far_bound: 200.0,
            maximum_distance: 280.0,
            ..default()
        }
        .build(),
    ));

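    // Shared main color target: a fixed 1280x720 HDR (Rg11b10Ufloat) target plus
    // an MSAA x4 multisampled texture, rendered into by all four cameras below.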
    let main_color_target = commands
        .spawn(MainColorTarget::new(
            images.add(Image {
                data: None,
                texture_descriptor: TextureDescriptor {
                    label: Some("main_texture_a"),
                    size: UVec2::new(1280, 720).to_extents(),
                    mip_level_count: 1,
                    sample_count: 1,
                    dimension: TextureDimension::D2,
                    format: TextureFormat::Rg11b10Ufloat,
                    usage: TextureUsages::RENDER_ATTACHMENT | TextureUsages::TEXTURE_BINDING,
                    view_formats: &[],
                },
                asset_usage: RenderAssetUsages::RENDER_WORLD,
                copy_on_resize: false,
                ..Default::default()
            }),
            Some(images.add(Image {
                data: None,
                texture_descriptor: TextureDescriptor {
                    label: Some("main_texture_b"),
                    size: UVec2::new(1280, 720).to_extents(),
                    mip_level_count: 1,
                    sample_count: 1,
                    dimension: TextureDimension::D2,
                    format: TextureFormat::Rg11b10Ufloat,
                    usage: TextureUsages::RENDER_ATTACHMENT | TextureUsages::TEXTURE_BINDING,
                    view_formats: &[],
                },
                asset_usage: RenderAssetUsages::RENDER_WORLD,
                copy_on_resize: false,
                ..Default::default()
            })),
            Some(images.add(Image {
                data: None,
                texture_descriptor: TextureDescriptor {
                    label: Some("main_texture_multisampled"),
                    size: UVec2::new(1280, 720).to_extents(),
                    mip_level_count: 1,
                    sample_count: 4, // MSAAx4
                    dimension: TextureDimension::D2,
                    format: TextureFormat::Rg11b10Ufloat,
                    usage: TextureUsages::RENDER_ATTACHMENT,
                    view_formats: &[],
                },
                asset_usage: RenderAssetUsages::RENDER_WORLD,
                copy_on_resize: false,
                ..Default::default()
            })),
        ))
        .id();

    // Cameras and their dedicated UI
    for (index, (camera_name, camera_pos)) in [
        ("Player 1", Vec3::new(0.0, 200.0, -150.0)),
        ("Player 2", Vec3::new(150.0, 150., 50.0)),
        ("Player 3", Vec3::new(100.0, 150., -150.0)),
        ("Player 4", Vec3::new(-100.0, 80., 150.0)),
    ]
    .iter()
    .enumerate()
    {
        let bundle = (
            Camera3d::default(),
            Transform::from_translation(*camera_pos).looking_at(Vec3::ZERO, Vec3::Y),
            Camera {
                // Renders cameras with different priorities to prevent ambiguities
                order: index as isize,
                ..default()
            },
            CameraPosition {
                pos: UVec2::new((index % 2) as u32, (index / 2) as u32),
            },
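            // Opt out of the auto-configured per-camera target and render into
            // the shared MainColorTarget entity instead.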
            NoAutoConfiguredMainColorTarget,
            WithMainColorTarget(main_color_target),
        );
        let camera = if index == 0 || index == 1 {
            commands.spawn((Bloom::NATURAL, bundle))
        } else {
            commands.spawn(bundle)
        }
        .id();

        // Set up UI
        commands.spawn((
            UiTargetCamera(camera),
            Node {
                width: percent(100),
                height: percent(100),
                ..default()
            },
            children![
                (
                    Text::new(*camera_name),
                    Node {
                        position_type: PositionType::Absolute,
                        top: px(12),
                        left: px(12),
                        ..default()
                    },
                ),
                buttons_panel(),
            ],
        ));
    }

    fn buttons_panel() -> impl Bundle {
        (
            Node {
                position_type: PositionType::Absolute,
                width: percent(100),
                height: percent(100),
                display: Display::Flex,
                flex_direction: FlexDirection::Row,
                justify_content: JustifyContent::SpaceBetween,
                align_items: AlignItems::Center,
                padding: UiRect::all(px(20)),
                ..default()
            },
            children![
                rotate_button("<", Direction::Left),
                rotate_button(">", Direction::Right),
            ],
        )
    }

    fn rotate_button(caption: &str, direction: Direction) -> impl Bundle {
        (
            RotateCamera(direction),
            Button,
            Node {
                width: px(40),
                height: px(40),
                border: UiRect::all(px(2)),
                justify_content: JustifyContent::Center,
                align_items: AlignItems::Center,
                ..default()
            },
            BorderColor::all(Color::WHITE),
            BackgroundColor(Color::srgb(0.25, 0.25, 0.25)),
            children![Text::new(caption)],
        )
    }
}

#[derive(Component)]
struct CameraPosition {
    pos: UVec2,
}

#[derive(Component)]
struct RotateCamera(Direction);

enum Direction {
    Left,
    Right,
}

fn set_camera_viewports(
    windows: Query<&Window>,
    mut window_resized_reader: MessageReader<WindowResized>,
    mut query: Query<(&CameraPosition, &mut Camera)>,
) {
    // We need to dynamically resize the cameras' viewports whenever the window size changes
    // so that each camera always takes up half the screen in each dimension.
    // A WindowResized message is sent when the window is first created, allowing us to reuse this system for initial setup.
    for window_resized in window_resized_reader.read() {
        let window = windows.get(window_resized.window).unwrap();
        let size = window.physical_size() / 2;

        for (camera_position, mut camera) in &mut query {
            camera.viewport = Some(Viewport {
                physical_position: camera_position.pos * size,
                physical_size: size,
                ..default()
            });
        }
    }
}

fn button_system(
    interaction_query: Query<
        (&Interaction, &ComputedUiTargetCamera, &RotateCamera),
        (Changed<Interaction>, With<Button>),
    >,
    mut camera_query: Query<&mut Transform, With<Camera>>,
) {
    for (interaction, computed_target, RotateCamera(direction)) in &interaction_query {
        if let Interaction::Pressed = *interaction {
            // Since UiTargetCamera propagates to the children, we can use it to find
            // which side of the screen the button is on.
            if let Some(mut camera_transform) = computed_target
                .get()
                .and_then(|camera| camera_query.get_mut(camera).ok())
            {
                let angle = match direction {
                    Direction::Left => -0.1,
                    Direction::Right => 0.1,
                };
                camera_transform.rotate_around(Vec3::ZERO, Quat::from_axis_angle(Vec3::Y, angle));
            }
        }
    }
}

@IceSentry added the A-Rendering (Drawing game state to the screen) label on Jan 21, 2026
@tychedelia (Member) left a comment

I'm not 100% sure here, but part of me wonders whether this shouldn't just be a single Image representing the "base" texture descriptor that the other fields are derived from. To start, main_b must always be identical to main_a anyway, and in the case that the base descriptor is multi-sampled, the multisampled texture can be derived from it.

Part of the difficulty here with using Handle<Image> is that if you ever wanted to actually use that handle for some purpose (like displaying it on an image node in another pass), there's no way to know which handle to use, since the "correct" handle cannot be determined in the main world.

This is also a way of asking the question: outside of potential UX drawbacks, is there any reason to have separate Hdr and Msaa components when we have the ability to fully specify the descriptor?
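
Roughly, something like this (purely illustrative; the name and shape here are hypothetical, not part of this PR):

```rust
use bevy::prelude::*;

/// Hypothetical component: a single "base" image whose descriptor drives all
/// main color targets. main_texture_a/b would reuse this descriptor with
/// sample_count = 1, and the multisampled texture would be derived from it
/// when the base descriptor is multisampled, so the main world never has to
/// guess which handle is the "correct" one.
#[derive(Component)]
struct MainColorTargetBase(Handle<Image>);
```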

@beicause (Author) commented Jan 22, 2026

My idea is that since the camera only depends on these four color targets (main a, main b, multisampled, and the resolve target), other functionality can be achieved through custom render nodes, such as inputs (MSAA writeback) and outputs (upscaling/RenderTarget). Given this, it's better to fully expose them. Actually, I also want to expose the resolve target, but currently ColorAttachment implies the resolve target. (It seems fine to resolve MSAA to main a/b by default; configuring the resolve target is unnecessary since main a/b is unused.)

> Part of the difficulty here with using Handle<Image> is that if you ever wanted to actually use that handle for some purpose (like displaying it on an image node in another pass), there's no way to know which handle to use, since the "correct" handle cannot be determined in the main world.

I think it is possible to determine the texture in use in the main world through an AtomicUsize flag.
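
For example, a minimal sketch of the kind of flag meant here (illustrative only, not this PR's actual implementation):

```rust
use std::sync::{
    atomic::{AtomicUsize, Ordering},
    Arc,
};

/// Illustrative only: a counter shared between the render world and the main world.
/// The render world bumps it every time a pass swaps main_texture_a/b; the main
/// world reads its parity to know which handle holds the latest output.
#[derive(Clone, Default)]
pub struct MainTextureFlag(Arc<AtomicUsize>);

impl MainTextureFlag {
    /// Called by the render world after each a/b swap.
    pub fn flip(&self) {
        self.0.fetch_add(1, Ordering::Relaxed);
    }

    /// Read in the main world: even => main_texture_a is current, odd => main_texture_b.
    pub fn a_is_current(&self) -> bool {
        self.0.load(Ordering::Relaxed) % 2 == 0
    }
}
```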

> This is also a way of asking the question: outside of potential UX drawbacks, is there any reason to have separate Hdr and Msaa components when we have the ability to fully specify the descriptor?

Yeah, I don't think separate components like Msaa and CameraMainTextureUsages are necessary anymore. But Hdr is still required; it determines whether to perform tonemapping and is independent of the texture format.

@alice-i-cecile added the C-Feature (A new feature, making something new possible), D-Complex (Quite challenging from either a design or technical perspective. Ask for help!), X-Contentious (There are nontrivial implications that should be thought through), and S-Waiting-on-Author (The author needs to make changes or address concerns before this can be merged) labels on Jan 22, 2026
@github-actions commented

Your PR caused a change in the graphical output of an example or rendering test. This might be intentional, but it could also mean that something broke!
You can review it at https://pixel-eagle.com/project/B04F67C0-C054-4A6F-92EC-F599FEC2FD1D?filter=PR-22637

If it's expected, please add the M-Deliberate-Rendering-Change label.

If this change seems unrelated to your PR, you can consider updating your PR to target the latest main branch, either by rebasing or merging main into it.

@beicause force-pushed the configure-main-color-target branch from f9e0a23 to 89c6cef on January 24, 2026 at 10:42